
[volume-5] 인덱스와 캐시를 사용한 성능 최적화 #196

Open
hyejin0810 wants to merge 4 commits into Loopers-dev-lab:hyejin0810 from hyejin0810:week5

Conversation

@hyejin0810

@hyejin0810 hyejin0810 commented Mar 12, 2026

📌 Summary

  • Background: product list queries were performing full table scans (type=ALL), and every request queried the DB directly.
  • Goal: add indexes to narrow the scan range, and eliminate the cost of repeated reads with a Redis Cache-Aside pattern.
  • Result: caching applied to product detail/list reads; cache invalidation on like/update/delete preserves consistency.

🧭 Context & Decision

Problem definition

  • Current behavior: product list queries filter by brand_id and apply a sort, but with no supporting index they fall back to a full scan.
  • Problem: with 100k+ rows, every request performs a full scan plus a filesort.
  • Success criteria: indexes narrow the scan range; a cache hit responds without any DB query.

Options and decisions

Index design: whether to include deleted_at

  • A: (brand_id, likes_count, deleted_at) — include deleted_at
  • B: (brand_id, likes_count) — exclude deleted_at
  • Decision: B
  • Rationale: nearly 100% of rows pass the deleted_at IS NULL filter, so the column's effective cardinality is 1 and it adds no selectivity; including it would only enlarge the index.

Cache invalidation: TTL only vs. immediate deletion

  • A: rely on TTL alone
  • B: delete the cache immediately when a change event occurs
  • Decision: B
  • Rationale: data that must reflect changes immediately, such as like counts and prices, risks going stale under TTL alone.
  • Trade-off: if an invalidation is missed, data can stay stale for up to the TTL (10/5 minutes); the TTL serves as the final safety net.

🏗️ Design Overview

Scope of change

  • Affected modules/domains: product, like
  • Added: ProductCacheService, scripts/add-indexes.sql, scripts/seed-data.sql
  • Removed/replaced: none

Component responsibilities

  • ProductCacheService: CRUD for the Redis detail/list caches, with writes on the master and reads on the replica. On failure it returns Optional.empty() so callers fall back to the DB.
  • ProductFacade: applies the Cache-Aside pattern; on a miss it queries the DB and then populates the cache.
  • LikeFacade: on like/unlike, evicts the detail cache and the entire list cache.
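The Cache-Aside flow these components implement can be sketched without any framework; the Map-backed cache and loader function below are illustrative stand-ins for Redis and the repository, not the actual project classes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal Cache-Aside sketch: check the cache first, fall back to the
// loader (the "DB") on a miss, then populate the cache for later reads.
class CacheAside<K, V> {
    private final Map<K, V> cache = new HashMap<>(); // stand-in for Redis
    private int dbHits = 0;                          // counts loader calls

    V get(K key, Function<K, V> loader) {
        V cached = cache.get(key);
        if (cached != null) {
            return cached;               // cache HIT: no DB query
        }
        dbHits++;
        V value = loader.apply(key);     // cache MISS: query the DB
        cache.put(key, value);           // populate for subsequent reads
        return value;
    }

    void evict(K key) { cache.remove(key); } // invalidation on change

    int dbHits() { return dbHits; }
}

public class Main {
    public static void main(String[] args) {
        CacheAside<Long, String> products = new CacheAside<>();
        Function<Long, String> db = id -> "product-" + id;

        products.get(1L, db); // miss -> loads from "DB"
        products.get(1L, db); // hit  -> no load
        System.out.println(products.dbHits()); // 1

        products.evict(1L);   // e.g. after a like/update
        products.get(1L, db); // miss again -> loads from "DB"
        System.out.println(products.dbHits()); // 2
    }
}
```

The evict call is the piece LikeFacade and the update/delete paths are responsible for; everything else is the read path.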

Three indexes
CREATE INDEX idx_products_brand_likes ON products (brand_id, likes_count);
CREATE INDEX idx_products_brand_price ON products (brand_id, price);
CREATE INDEX idx_products_created_at ON products (created_at);

📑 Cache Strategy

| Feature | Cache key pattern | TTL |
| --- | --- | --- |
| Detail | product:detail:{id} | 10 min |
| List | product:list:{brandId}:{sort}:{page}:{size} | 5 min |
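A minimal sketch of how these key patterns might be composed (the helper names are illustrative; the real key building lives inside ProductCacheService):

```java
public class CacheKeys {
    // Key patterns from the cache strategy above; prefixes follow the
    // PR's convention of colon-delimited namespaces.
    static String detailKey(long productId) {
        return "product:detail:" + productId;
    }

    static String listKey(long brandId, String sort, int page, int size) {
        return "product:list:" + brandId + ":" + sort + ":" + page + ":" + size;
    }

    public static void main(String[] args) {
        System.out.println(detailKey(42));              // product:detail:42
        System.out.println(listKey(7, "likes", 0, 20)); // product:list:7:likes:0:20
    }
}
```

Because every list-key component is embedded in the key, the list cache can only be invalidated wholesale by pattern, which is why the PR needs deleteListAll.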

🔁 Flow Diagram

1. Product list query (Cache-Aside)

sequenceDiagram
    autonumber
    participant Client
    participant ProductFacade
    participant ProductCacheService
    participant DB as MySQL

    Client->>ProductFacade: getProducts(brandId, sort, pageable)
    ProductFacade->>ProductCacheService: getList(brandId, sort, page, size)

    alt cache HIT
        ProductCacheService-->>ProductFacade: Page<ProductInfo>
        ProductFacade-->>Client: 200 OK (served from cache)
    else cache MISS
        ProductCacheService-->>ProductFacade: Optional.empty()
        ProductFacade->>DB: findProducts(brandId, sortedPageable)
        DB-->>ProductFacade: Page<Product>
        ProductFacade->>ProductCacheService: setList(brandId, sort, result)
        ProductFacade-->>Client: 200 OK (served from DB)
    end

2. Cache invalidation (like / unlike)
sequenceDiagram
    autonumber
    participant Client
    participant LikeFacade
    participant DB as MySQL
    participant ProductCacheService

    Client->>LikeFacade: addLike(loginId, productId)
    LikeFacade->>DB: addLike / increaseLikesCount
    
    rect rgb(255, 241, 241)
        Note right of LikeFacade: Maintain data consistency (evict)
        LikeFacade->>ProductCacheService: delete(productId)
        LikeFacade->>ProductCacheService: deleteListAll()
    end
    
    LikeFacade-->>Client: 200 OK

Purpose: to reduce the MySQL full table scans (type=ALL) on product list queries and the cost of hitting the DB on every request, this PR adds indexes (brand_id+likes_count, brand_id+price, created_at) and introduces a Redis-based Cache-Aside pattern, improving response times and relieving DB load.
Key changes: a new ProductCacheService (detail TTL 10 min, list TTL 5 min, master/replica split, fallback to the DB via Optional.empty() on failure, list cache serialized/deserialized through ProductListCache); ProductFacade/LikeFacade now read and populate the cache and, on create/update/delete/like changes, delete the detail key and invalidate the entire list cache (deleteListAll); scripts/add-indexes.sql and scripts/seed-data.sql added.
Risks / caveats: list invalidation uses SCAN-based deletion of all matching keys, which can cause deletion cost, latency, and concurrency issues in large key spaces; Redis master/replica or network failures increase fallback frequency and can temporarily degrade performance; excluding deleted_at from the indexes is a cardinality-based decision but may need revisiting for certain query patterns.
Testing / verification: load bulk data with scripts/seed-data.sql, then use EXPLAIN to confirm index usage and the removal of filesort; compare response times before and after indexing; monitor cache hit/miss ratios and Redis INFO; verify final likes_count consistency after cache invalidation under concurrent like scenarios.
Open question: is production Redis run as a single master/replica setup (or in cluster mode), and is there a defined strategy for the frequency and batching of SCAN-based deletion for list-cache invalidation under large key counts (e.g., incremental deletion or key-pattern sharding)?

hyejin0810 and others added 2 commits March 10, 2026 23:33
- Added three indexes to the products table
  - idx_products_brand_likes (brand_id, likes_count): brand filter + likes sort
  - idx_products_brand_price (brand_id, price): brand filter + price sort
  - idx_products_created_at (created_at): newest-first sort
- Added ProductCacheService: Redis cache for product details (TTL 10 min)
- ProductFacade: cache applied on product detail reads; invalidated on update/delete
- LikeFacade: product cache invalidated on like/unlike
- OrderFacade: fixed a missing import
- jpa.yml: changed ddl-auto from create to validate
- Added scripts/seed-data.sql and add-indexes.sql

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@coderabbitai

coderabbitai bot commented Mar 12, 2026

📝 Walkthrough


Adds a Redis-based ProductCacheService and integrates cache read/populate and invalidation logic into ProductFacade/LikeFacade. Includes list/detail cache TTLs, a bulk list-deletion implementation, a script adding the related DB indexes, and a large-volume seed-data script.

Changes

Cohort / File(s) — Summary

Caching infrastructure added
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java
Adds a Redis-based cache service for product details (10 min TTL) and lists (5 min TTL), with get/set/delete/getList/setList/deleteListAll methods plus serialization and exception-softening logic.

Product domain facade changes
apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java
Injects ProductCacheService and adds cache-first logic to the getProductDetail/getProducts paths. Adds detail and list cache invalidation calls after create/update/delete.

Like facade changes
apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java
Adds an injected ProductCacheService field. After addLike/removeLike, deletes the affected product's detail cache and bulk-deletes the list cache.

Minor import change
apps/commerce-api/src/main/java/com/loopers/application/order/OrderFacade.java
Changes the explicit import of ObjectOptimisticLockingFailureException to the org.springframework.orm package. No logic change.

DB infrastructure and seed data added
scripts/add-indexes.sql, scripts/seed-data.sql
Adds an index-creation script for product list performance and a seed script bulk-inserting 20 brands and 100,000 products.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant ProductFacade
    participant ProductCacheService
    participant Redis
    participant Database

    rect rgba(100, 200, 150, 0.5)
    Note over Client,Database: getProductDetail - cache hit scenario
    Client->>ProductFacade: getProductDetail(id)
    ProductFacade->>ProductCacheService: get(id)
    ProductCacheService->>Redis: GET product:detail:{id}
    Redis-->>ProductCacheService: JSON ProductInfo
    ProductCacheService-->>ProductFacade: Optional<ProductInfo>
    ProductFacade-->>Client: ProductInfo (served from cache)
    end
sequenceDiagram
    participant Client
    participant ProductFacade
    participant ProductCacheService
    participant Redis
    participant Database

    rect rgba(200, 150, 100, 0.5)
    Note over Client,Database: getProductDetail - cache miss scenario
    Client->>ProductFacade: getProductDetail(id)
    ProductFacade->>ProductCacheService: get(id)
    ProductCacheService->>Redis: GET product:detail:{id}
    Redis-->>ProductCacheService: (nil)
    ProductCacheService->>Database: SELECT product by id
    Database-->>ProductCacheService: Product entity
    ProductCacheService->>Redis: SET product:detail:{id} (TTL 10m)
    ProductCacheService-->>ProductFacade: Optional<ProductInfo>
    ProductFacade-->>Client: ProductInfo (served from DB)
    end
sequenceDiagram
    participant Client
    participant ProductFacade
    participant ProductCacheService
    participant Redis
    participant Database

    rect rgba(150, 150, 200, 0.5)
    Note over Client,Database: Cache invalidation after a product change or like change
    Client->>ProductFacade: updateProduct / deleteProduct / addLike / removeLike
    ProductFacade->>Database: UPDATE/DELETE or likes change
    Database-->>ProductFacade: success
    ProductFacade->>ProductCacheService: delete(id)
    ProductCacheService->>Redis: DEL product:detail:{id}
    ProductFacade->>ProductCacheService: deleteListAll()
    ProductCacheService->>Redis: SCAN product:list:* -> DEL (batched)
    ProductFacade-->>Client: response
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • #188: introduces a Redis-based product cache and invalidation logic, so it may conflict with or duplicate the same cache implementation and key conventions.
  • #185: changes LikeFacade (like handling), so it may interact with the cache invalidation calls made on like changes.
  • #103: introduces ProductCacheService into ProductFacade and LikeFacade, overlapping at the code level, so it is likely related.
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 35.29%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

  • Title check — ✅ Passed: the PR title clearly summarizes the core changes (index additions and Redis caching).
  • Description check — ✅ Passed: the PR description follows the template, covering the problem definition, options and decisions, design overview, cache strategy, and flow diagrams.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java (1)

45-60: ⚠️ Potential issue | 🟠 Major

Cache invalidation must run after the transaction commits

Right now the Redis keys are deleted before the DB change has committed. If another request sees a cache miss in that window and repopulates the cache from the pre-commit DB state, a detail/list cache that does not reflect the new like count can survive until the TTL expires. It is safer to move the invalidation to a TransactionSynchronizationManager afterCommit hook or a @TransactionalEventListener(phase = AFTER_COMMIT) listener. A concurrency integration test should cover the scenario "transaction A changes a like and waits before commit → transaction B repopulates the cache via a read → A commits" and assert that the final cache holds the latest likes_count.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`
around lines 45 - 60, The cache invalidation calls (productCacheService.delete
and productCacheService.deleteListAll) in addLike and removeLike run inside the
transaction and must be executed after commit; change the implementation to
register an after-commit hook (e.g.,
TransactionSynchronizationManager.registerSynchronization(...).afterCommit or
use an `@TransactionalEventListener`(phase = AFTER_COMMIT) event) so that
likeService.addLike/removeLike and
productService.increaseLikesCount/decreaseLikesCount execute inside the
transaction and the productCacheService.delete/productCacheService.deleteListAll
calls are invoked only after successful commit; ensure you reference
addLike/removeLike methods and move cache deletion logic into the after-commit
callback or event listener accordingly.
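The ordering problem and the suggested after-commit fix can be modeled without Spring. The toy transaction below queues invalidation callbacks and runs them only once commit succeeds; in the real code this role is played by TransactionSynchronizationManager or @TransactionalEventListener(phase = AFTER_COMMIT):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of deferring cache invalidation until after commit.
// Callbacks registered during the transaction run only once commit()
// succeeds, so readers cannot observe "invalidation before commit".
class Tx {
    private final List<Runnable> afterCommit = new ArrayList<>();
    private boolean committed = false;

    void registerAfterCommit(Runnable hook) { afterCommit.add(hook); }

    void commit() {
        committed = true;                   // DB changes become visible here
        afterCommit.forEach(Runnable::run); // only then invalidate caches
    }

    boolean isCommitted() { return committed; }
}

public class Main {
    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Tx tx = new Tx();

        log.add("update likes_count");      // inside the transaction
        tx.registerAfterCommit(() -> log.add("evict product:detail:1"));
        tx.registerAfterCommit(() -> log.add("evict product:list:*"));

        log.add("commit");
        tx.commit();                        // evictions run here, after commit

        System.out.println(log);
    }
}
```

With this ordering, a concurrent read between the DB write and the eviction can at worst re-cache the old state briefly, but the eviction that follows the commit always removes it.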
🧹 Nitpick comments (1)
apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java (1)

55-67: The cache key should use the normalized sort value

The raw sort string is passed straight into cache-key construction here, while the actual DB query uses the sort normalized by resolveSort(sort). As a result, requests that all resolve to the same default sort (null, empty string, unsupported values) produce distinct Redis keys, wasting memory and inflating the cost of full invalidation. It is better to normalize the allowed sort values first and use that value for both the cache key and the DB query. A test verifying that null/unsupported sort requests converge on the same cache key is also worth adding.

Also applies to: 82-83
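The normalization this comment asks for can be sketched as a pure helper; the allowed sort values here are assumptions for illustration, since the actual mapping lives in ProductFacade's resolveSort:

```java
import java.util.Set;

public class SortNormalizer {
    // Illustrative allow-list; the real values come from resolveSort.
    private static final Set<String> ALLOWED = Set.of("likes", "price", "latest");
    private static final String DEFAULT = "latest";

    // Map null/blank/unsupported values to one canonical default, so that
    // equivalent requests share a single cache key and a single DB sort.
    static String normalizeSort(String sort) {
        if (sort == null || sort.isBlank()) return DEFAULT;
        String s = sort.trim().toLowerCase();
        return ALLOWED.contains(s) ? s : DEFAULT;
    }

    public static void main(String[] args) {
        System.out.println(normalizeSort(null));    // latest
        System.out.println(normalizeSort(""));      // latest
        System.out.println(normalizeSort("PRICE")); // price
        System.out.println(normalizeSort("bogus")); // latest
    }
}
```

Calling this once at the top of getProducts and passing the result to both the cache lookup and PageRequest construction collapses all equivalent requests onto one key.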

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`
around lines 55 - 67, Normalize the incoming sort value once and use that
normalized value for both the cache key lookup and the DB query: call
resolveSort(sort) (or a new helper normalizeSort) to produce a canonical sort
string (e.g., mapping null/empty/unsupported to the default) and pass that
normalizedSort into productCacheService.getList(...) and into
PageRequest.of(...); update getProducts (and the analogous code around the later
block referenced at 82-83) to use the same normalizedSort for cache and
PageRequest to avoid divergent Redis keys.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`:
- Around line 31-37: The cache invalidation
(productCacheService.deleteListAll()) is happening inside the transactional
method createProduct (and similarly in the update/delete methods around lines
87-101), causing a race where other requests can repopulate stale cache before
the DB commit; move the cache deletion to run after commit by registering a
post-commit callback (e.g., using
TransactionSynchronizationManager.registerSynchronization or an afterCommit
event listener) so that productCacheService.deleteListAll() executes only after
the transaction successfully commits and ensure the same change for the
corresponding update/delete methods.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java`:
- Around line 115-121: deleteListAll currently calls
masterRedisTemplate.keys(LIST_KEY_PREFIX + "*") which uses Redis KEYS (O(N)) and
will block under write load; replace this with a non-blocking strategy:
preferred — implement an O(1) versioning scheme by adding a LIST_VERSION_KEY
(e.g., "product:list:version") and update cache key composition to include the
version (so ProductCacheService and any code that reads/writes product list
cache incorporate the version), then change deleteListAll() to simply increment
the LIST_VERSION_KEY via masterRedisTemplate.increment(...) to invalidate all
list entries; alternatively, if you cannot change key composition now, replace
the KEYS call with a SCAN-based batched delete using Redis SCAN cursor and
deleting retrieved key batches via masterRedisTemplate.delete(batch) to avoid
blocking, and add a load test for thousands of keys to ensure no CPU spikes.
- Around line 32-43: ProductCacheService currently reads with
defaultRedisTemplate (configured REPLICA_PREFERRED) while writes use
masterRedisTemplate, which can return stale values after a write; change the
cache read paths in ProductCacheService to use masterRedisTemplate for
operations that require immediate consistency (e.g.,
getProductById/getProductList) or implement a read-after-write pattern using a
versioned namespace/key (increment a version on write and read using that
version) to guarantee fresh reads; update references to
defaultRedisTemplate/masterRedisTemplate in the class and ensure
RedisConfig.REDIS_TEMPLATE_MASTER qualifier is used for reads, and add an
integration test that updates a product then immediately reads it to assert no
stale value is returned.

In `@scripts/add-indexes.sql`:
- Around line 20-22: Add a composite index for brand + createdAt so
brand-filtered queries sorted by createdAt can use an index: create an index on
(brand_id, created_at) in scripts/add-indexes.sql, then run EXPLAIN on the
brand-filter + createdAt DESC page query used by ProductFacade (default sort
createdAt DESC) and ProductRepositoryImpl (brand filter logic) to verify the
planner uses the new idx; ensure the index name is descriptive (e.g.,
idx_products_brand_created_at) and confirm EXPLAIN shows index usage rather than
filesort/extra scan.

In `@scripts/seed-data.sql`:
- Around line 70-71: 생성 스크립트가 created_at과 updated_at을 독립적으로 랜덤 생성해(updated_at이
created_at보다 과거가 될 수 있음) 문제이니, 먼저 created_at을 생성(예: DATE_SUB(NOW(), INTERVAL
FLOOR(RAND()*365) DAY) using created_at)한 다음 updated_at을 created_at보다 같거나 최신이
되도록 파생시키세요 (예: updated_at := created_at + INTERVAL FLOOR(RAND()*X) DAY 또는
GREATEST(created_at, DATE_SUB(...))). 적용 대상 식은 현재 파일의 DATE_SUB(NOW(), INTERVAL
FLOOR(RAND() * 365) DAY) 표현을 사용하는 삽입/VALUES 로직과 관련 있으며, 또한 데이터 적재 후 검증 쿼리(SELECT
COUNT(*) FROM ... WHERE updated_at < created_at)를 추가해 위 조건을 위반하는 레코드가 0인지 확인하도록
하세요.

---

Outside diff comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java`:
- Around line 45-60: The cache invalidation calls (productCacheService.delete
and productCacheService.deleteListAll) in addLike and removeLike run inside the
transaction and must be executed after commit; change the implementation to
register an after-commit hook (e.g.,
TransactionSynchronizationManager.registerSynchronization(...).afterCommit or
use an `@TransactionalEventListener`(phase = AFTER_COMMIT) event) so that
likeService.addLike/removeLike and
productService.increaseLikesCount/decreaseLikesCount execute inside the
transaction and the productCacheService.delete/productCacheService.deleteListAll
calls are invoked only after successful commit; ensure you reference
addLike/removeLike methods and move cache deletion logic into the after-commit
callback or event listener accordingly.

---

Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`:
- Around line 55-67: Normalize the incoming sort value once and use that
normalized value for both the cache key lookup and the DB query: call
resolveSort(sort) (or a new helper normalizeSort) to produce a canonical sort
string (e.g., mapping null/empty/unsupported to the default) and pass that
normalizedSort into productCacheService.getList(...) and into
PageRequest.of(...); update getProducts (and the analogous code around the later
block referenced at 82-83) to use the same normalizedSort for cache and
PageRequest to avoid divergent Redis keys.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: bc7bd6f2-7968-436f-9077-82561e5bac63

📥 Commits

Reviewing files that changed from the base of the PR and between 5a1bc28 and 60a1638.

📒 Files selected for processing (6)
  • apps/commerce-api/src/main/java/com/loopers/application/like/LikeFacade.java
  • apps/commerce-api/src/main/java/com/loopers/application/order/OrderFacade.java
  • apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java
  • scripts/add-indexes.sql
  • scripts/seed-data.sql

Comment on lines 31 to 37
@Transactional
public ProductInfo createProduct(Long brandId, String name, Integer price, Integer stock,
                                 String description, String imageUrl) {
    Brand brand = brandService.getBrand(brandId);
    Product product = productService.createProduct(brandId, name, price, stock, description, imageUrl);
    productCacheService.deleteListAll();
    return ProductInfo.from(product, brand);

⚠️ Potential issue | 🟠 Major

Product-change cache invalidation must also be deferred until after commit

Create, update, and delete all evict Redis keys inside the same transaction, before commit. In that ordering, another request can read the pre-commit DB state and repopulate the cache, overwriting the just-applied product change with stale data. It is safer to move the invalidation to an afterCommit hook or a post-commit event listener. Additionally, each of create/update/delete should get a concurrency integration test verifying that even if a pre-commit read repopulates the cache, the final post-commit cache is up to date.

Also applies to: 87-101

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java`
around lines 31 - 37, The cache invalidation
(productCacheService.deleteListAll()) is happening inside the transactional
method createProduct (and similarly in the update/delete methods around lines
87-101), causing a race where other requests can repopulate stale cache before
the DB commit; move the cache deletion to run after commit by registering a
post-commit callback (e.g., using
TransactionSynchronizationManager.registerSynchronization or an afterCommit
event listener) so that productCacheService.deleteListAll() executes only after
the transaction successfully commits and ensure the same change for the
corresponding update/delete methods.

Comment on lines +32 to +43
private final RedisTemplate<String, String> defaultRedisTemplate; // reads (replica)
private final RedisTemplate<String, String> masterRedisTemplate;  // writes (master)
private final ObjectMapper objectMapper;

public ProductCacheService(
        RedisTemplate<String, String> defaultRedisTemplate,
        @Qualifier(RedisConfig.REDIS_TEMPLATE_MASTER) RedisTemplate<String, String> masterRedisTemplate,
        ObjectMapper objectMapper
) {
    this.defaultRedisTemplate = defaultRedisTemplate;
    this.masterRedisTemplate = masterRedisTemplate;
    this.objectMapper = objectMapper;

⚠️ Potential issue | 🟠 Major

Replica reads break the immediate-invalidation guarantee

Looking at the related configuration in modules/redis/src/main/java/com/loopers/config/redis/RedisConfig.java:19-69, the default template reads with REPLICA_PREFERRED, while this class sends only writes/deletes to the master. Under replication lag, that combination lets a replica keep returning deleted keys or pre-update values, so a just-modified product detail/list can be served from stale cache again. For a product cache that needs immediate consistency, either route reads through the master as well, or at minimum adopt a structure that guarantees read-after-write, such as a version key/namespace scheme. An integration test asserting that a read immediately after an update does not return a stale value is also worth adding.

Also applies to: 47-57, 80-95

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java`
around lines 32 - 43, ProductCacheService currently reads with
defaultRedisTemplate (configured REPLICA_PREFERRED) while writes use
masterRedisTemplate, which can return stale values after a write; change the
cache read paths in ProductCacheService to use masterRedisTemplate for
operations that require immediate consistency (e.g.,
getProductById/getProductList) or implement a read-after-write pattern using a
versioned namespace/key (increment a version on write and read using that
version) to guarantee fresh reads; update references to
defaultRedisTemplate/masterRedisTemplate in the class and ensure
RedisConfig.REDIS_TEMPLATE_MASTER qualifier is used for reads, and add an
integration test that updates a product then immediately reads it to assert no
stale value is returned.

Comment on lines +70 to +71
DATE_SUB(NOW(), INTERVAL FLOOR(RAND() * 365) DAY),
DATE_SUB(NOW(), INTERVAL FLOOR(RAND() * 365) DAY),

⚠️ Potential issue | 🟡 Minor

updated_at can be earlier than created_at

The two columns are currently generated independently at random, so time-reversed rows are possible. Such samples distort validation around sorting, auditing, and change history, so it is safer to compute created_at first and then derive updated_at so that updated_at >= created_at. After loading, also add a validation query confirming that zero rows have updated_at < created_at.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/seed-data.sql` around lines 70 - 71, the seed script generates created_at
and updated_at independently at random (so updated_at can precede created_at); generate
created_at first (e.g., DATE_SUB(NOW(), INTERVAL FLOOR(RAND()*365) DAY)), then derive
updated_at so it is equal to or later than created_at (e.g., updated_at := created_at +
INTERVAL FLOOR(RAND()*X) DAY, or GREATEST(created_at, DATE_SUB(...))). This applies to
the insert/VALUES logic using the DATE_SUB(NOW(), INTERVAL FLOOR(RAND() * 365) DAY)
expression in the current file; also add a post-load validation query (SELECT COUNT(*)
FROM ... WHERE updated_at < created_at) to confirm zero violating records.
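The suggested derivation (updated_at computed from created_at) can be illustrated in plain Java; the actual fix belongs in the SQL expressions of scripts/seed-data.sql, so this is only a model of the invariant:

```java
import java.time.LocalDateTime;
import java.util.Random;

public class SeedTimestamps {
    // Generate created_at first, then derive updated_at from it, so that
    // updated_at >= created_at always holds (no time-reversed rows).
    static LocalDateTime[] randomTimestamps(Random rnd, LocalDateTime now) {
        LocalDateTime createdAt = now.minusDays(rnd.nextInt(365));
        LocalDateTime updatedAt = createdAt.plusDays(rnd.nextInt(30));
        if (updatedAt.isAfter(now)) updatedAt = now; // clamp to the present
        return new LocalDateTime[] { createdAt, updatedAt };
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        LocalDateTime now = LocalDateTime.now();
        for (int i = 0; i < 1000; i++) {
            LocalDateTime[] ts = randomTimestamps(rnd, now);
            if (ts[1].isBefore(ts[0])) throw new AssertionError("time-reversed row");
        }
        System.out.println("all rows satisfy updated_at >= created_at");
    }
}
```

The same structure carries over to SQL: compute the created_at expression once, then add a non-negative offset to it for updated_at.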

조혜진 and others added 2 commits March 13, 2026 14:13
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java (1)

35-37: ⚠️ Potential issue | 🟠 Major

Master/replica split reads re-expose stale cache right after invalidation

Operationally, during replication lag a pre-deletion value can be read back, defeating the immediate-invalidation goal and allowing a wrong value to be re-cached, which leads to fault-grade data inconsistency. get()/getList() are paths that need immediate consistency, so either unify them on master reads, or include a version namespace in the list/detail keys to guarantee read-after-write. An integration test asserting that a read immediately after a change returns no stale value should also be added.

Suggested fix
- String json = defaultRedisTemplate.opsForValue().get(KEY_PREFIX + productId);
+ String json = masterRedisTemplate.opsForValue().get(KEY_PREFIX + productId);

- String json = defaultRedisTemplate.opsForValue().get(listKey(brandId, sort, pageNumber, pageSize));
+ String json = masterRedisTemplate.opsForValue().get(listKey(brandId, sort, pageNumber, pageSize));
#!/bin/bash
# Check Redis read routing / read-preference configuration
rg -n -C3 'REPLICA|ReadFrom|REDIS_TEMPLATE_MASTER|@Primary|RedisTemplate<.*String, String>' modules apps --type=java

# Check the read paths in ProductCacheService
rg -n -C2 'defaultRedisTemplate\.opsForValue\(\)\.get|masterRedisTemplate\.opsForValue\(\)\.get' \
  apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java

Also applies to: 50-53, 83-86
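The versioned-namespace alternative raised in this thread can be modeled without Redis. Bumping a single version counter makes every old list key unreachable, so invalidation needs no KEYS/SCAN at all; the field and key names below are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Versioned-namespace invalidation: list keys embed a version number.
// Invalidation is a single version increment (O(1)); stale entries are
// simply never read again and, in a real Redis setup, expire via TTL.
class VersionedListCache {
    private final Map<String, String> store = new HashMap<>(); // stand-in for Redis
    private long listVersion = 0; // e.g. stored under "product:list:version"

    private String key(long brandId, String sort, int page) {
        return "product:list:v" + listVersion + ":" + brandId + ":" + sort + ":" + page;
    }

    void put(long brandId, String sort, int page, String json) {
        store.put(key(brandId, sort, page), json);
    }

    Optional<String> get(long brandId, String sort, int page) {
        return Optional.ofNullable(store.get(key(brandId, sort, page)));
    }

    void invalidateAll() { listVersion++; } // O(1), no SCAN/KEYS
}

public class Main {
    public static void main(String[] args) {
        VersionedListCache cache = new VersionedListCache();
        cache.put(7, "likes", 0, "[...page json...]");
        System.out.println(cache.get(7, "likes", 0).isPresent()); // true

        cache.invalidateAll(); // one increment invalidates every list entry
        System.out.println(cache.get(7, "likes", 0).isPresent()); // false
    }
}
```

The trade-off is that orphaned entries linger until their TTL, trading a little memory for removing the blocking key scan from the write path.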

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java`
around lines 35 - 37, ProductCacheService currently reads from replica via
defaultRedisTemplate in get()/getList(), which can return stale data immediately
after an invalidate; change those reads to use masterRedisTemplate (or otherwise
include a versioned namespace in keys for get()/getList() to guarantee
read-after-write) so invalidation is effective, update all occurrences of
defaultRedisTemplate.opsForValue().get in ProductCacheService to
masterRedisTemplate.opsForValue().get (or switch key composition to include a
version token for list/detail keys), and add an integration test that
deletes/updates a cache entry then immediately reads to assert stale values are
not returned.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java`:
- Around line 57-59: Update the catch blocks in ProductCacheService so the
exception object is logged instead of only e.getMessage(): replace log.warn("...
error: {}", ..., e.getMessage()) with a log.warn("... error while accessing
cache for {}: ", identifier, e) (for all six occurrences handling individual
product and list get/save/delete) to preserve the cause/stacktrace; then add
unit tests that simulate JsonMappingException and a Redis connection failure to
assert the methods return Optional.empty() and that the log contains the
exception stacktrace (i.e., verify logging captured the throwable) for the
methods involved (ProductCacheService's product/list get/save/delete handlers).
- Around line 121-129: The scan usage in ProductCacheService (ScanOptions +
masterRedisTemplate.scan(...)) leaks Redis connections because the Cursor isn't
closed and it loads all keys into memory before deleting; change deleteListAll()
to wrap the Cursor returned by masterRedisTemplate.scan(options) in a
try-with-resources to ensure closure, accumulate keys into a small fixed-size
batch (e.g., 1000) and call masterRedisTemplate.delete(batch) repeatedly rather
than collecting all keys into one List, and add a load test that exercises
deleteListAll() with large key counts (e.g., 100k) to measure memory and
latency.

---

Duplicate comments:
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java`:
- Around line 35-37: ProductCacheService currently reads from replica via
defaultRedisTemplate in get()/getList(), which can return stale data immediately
after an invalidate; change those reads to use masterRedisTemplate (or otherwise
include a versioned namespace in keys for get()/getList() to guarantee
read-after-write) so invalidation is effective, update all occurrences of
defaultRedisTemplate.opsForValue().get in ProductCacheService to
masterRedisTemplate.opsForValue().get (or switch key composition to include a
version token for list/detail keys), and add an integration test that
deletes/updates a cache entry then immediately reads to assert stale values are
not returned.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 8c775327-4f1b-43ea-82ab-09df1b19c56d

📥 Commits

Reviewing files that changed from the base of the PR and between 60a1638 and 4724df5.

📒 Files selected for processing (1)
  • apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java

Comment on lines +57 to +59
} catch (Exception e) {
    log.warn("캐시 조회 실패 - productId: {}, error: {}", productId, e.getMessage());
    return Optional.empty();

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check for message-only exception logging patterns
rg -n -C1 'log\.warn\(".*error: \{\}", .*e\.getMessage\(\)\)' \
  apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 885


🏁 Script executed:

#!/bin/bash
# Check all log patterns in ProductCacheService
cat -n apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java | grep -E 'log\.(warn|error|info)' 

# Check the total line count of the file
wc -l apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 759


🏁 Script executed:

#!/bin/bash
# Inspect the full ProductCacheService file (structure overview)
cat -n apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java | head -150

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 6466


The Redis cache read/write failure logs drop the exception cause, making stack-trace analysis impossible

Logging only e.getMessage() loses the stack trace of the root cause (serialization failure, connection timeout, memory error, etc.), delaying incident reproduction and root-cause analysis in production. Pass the exception object itself to preserve the cause chain.

This pattern repeats in six places, all of which should be fixed:

  • Line 58: single-product read
  • Line 69: single-product write
  • Line 78: single-product delete
  • Line 97: list read
  • Line 114: list write
  • Line 131: bulk list delete

In addition, unit tests that force a deserialization failure (JsonMappingException) and a Redis connection failure should verify both the Optional.empty() fallback and that the log message includes the stack trace.

Suggested fix
- log.warn("캐시 조회 실패 - productId: {}, error: {}", productId, e.getMessage());
+ log.warn("캐시 조회 실패 - productId: {}", productId, e);

- log.warn("캐시 저장 실패 - productId: {}, error: {}", productInfo.id(), e.getMessage());
+ log.warn("캐시 저장 실패 - productId: {}", productInfo.id(), e);

- log.warn("캐시 삭제 실패 - productId: {}, error: {}", productId, e.getMessage());
+ log.warn("캐시 삭제 실패 - productId: {}", productId, e);

- log.warn("목록 캐시 조회 실패 - brandId: {}, sort: {}, error: {}", brandId, sort, e.getMessage());
+ log.warn("목록 캐시 조회 실패 - brandId: {}, sort: {}", brandId, sort, e);

- log.warn("목록 캐시 저장 실패 - brandId: {}, sort: {}, error: {}", brandId, sort, e.getMessage());
+ log.warn("목록 캐시 저장 실패 - brandId: {}, sort: {}", brandId, sort, e);

- log.warn("목록 캐시 전체 삭제 실패: {}", e.getMessage());
+ log.warn("목록 캐시 전체 삭제 실패", e);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java`
around lines 57 - 59, Update the catch blocks in ProductCacheService so the
exception object is logged instead of only e.getMessage(): replace log.warn("...
error: {}", ..., e.getMessage()) with a log.warn("... error while accessing
cache for {}: ", identifier, e) (for all six occurrences handling individual
product and list get/save/delete) to preserve the cause/stacktrace; then add
unit tests that simulate JsonMappingException and a Redis connection failure to
assert the methods return Optional.empty() and that the log contains the
exception stacktrace (i.e., verify logging captured the throwable) for the
methods involved (ProductCacheService's product/list get/save/delete handlers).

Comment on lines +121 to +129
ScanOptions options = ScanOptions.scanOptions()
        .match(LIST_KEY_PREFIX + "*")
        .count(100)
        .build();
List<String> keys = new ArrayList<>();
masterRedisTemplate.scan(options).forEachRemaining(keys::add);
if (!keys.isEmpty()) {
    masterRedisTemplate.delete(keys);
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Inspect scan call sites and cursor-handling patterns
rg -n -C2 '\.scan\(' --type=java
rg -n -C2 'Cursor<' --type=java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 805


🏁 Script executed:

cat -n apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java | sed -n '110,135p'

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1252


🏁 Script executed:

rg -n 'import.*Cursor' --type=java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67


🌐 Web query:

Spring Data Redis Cursor AutoCloseable scan forEachRemaining memory

💡 Result:

  • In Spring Data Redis, Cursor/ScanCursor is designed to iterate incrementally: ScanCursor “continuously load[s] additional results from Redis server” as you iterate, rather than materializing the full result set up-front. [1]
  • forEachRemaining(...) is just the standard Iterator traversal; it does not inherently store everything in memory. Your consumer does (e.g., keys.add(...) collects all keys in RAM). Any memory growth is typically from what you accumulate while iterating, not from forEachRemaining itself. [2]
  • The big operational/memory pitfall is resource/connection retention if you don’t close the cursor: Spring explicitly says to close the cursor when done so implementations can clean up resources they must keep open to iterate (commonly the underlying connection), e.g., via try-with-resources. [2]
  • Also, setting a very large ScanOptions.count(...) can increase the per-roundtrip batch size returned by Redis, which can raise transient client-side memory usage per iteration (even though the scan is still incremental). [1]

Sources:
[1] Spring Data Redis ScanCursor Javadoc (current API) (docs.spring.io)
[2] Spring Data Redis Cursor Javadoc (2.7.8 API) (docs.spring.io)


🏁 Script executed:

rg -n '\.scan\(' apps/commerce-api/src/main/java --type=java -A3

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 642


🏁 Script executed:

rg -n 'delete\(' apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java --type=java -B2 -A2

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 588


🏁 Script executed:

# Find the call sites of deleteListAll
rg -n 'deleteListAll' apps/commerce-api/src/main/java --type=java

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 944


🌐 Web query:

Spring Data Redis RedisTemplate scan() Cursor AutoCloseable return type

💡 Result:

RedisTemplate#scan(ScanOptions) returns a Cursor<K> (not AutoCloseable in the method signature). However, Spring Data Redis’ Cursor is meant to be closed when you’re done (to avoid resource leaks), so you should use it with try-with-resources. [1]

In practice, the returned cursor is typically a ScanCursor, which implements Closeable and AutoCloseable (among others). [2]

ScanOptions options = ScanOptions.scanOptions().match("foo:*").count(1000).build();

try (Cursor<String> c = redisTemplate.scan(options)) {
  while (c.hasNext()) {
    String key = c.next();
    // ...
  }
}

Sources: [1] [2]


🏁 Script executed:

# Check the project's Spring Data Redis version
find . -name "pom.xml" -o -name "build.gradle" | xargs grep -l "spring-data-redis" | head -1 | xargs cat | grep -A2 -B2 "spring-data-redis"

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 67


Unclosed SCAN cursor and full key-set buffering cause connection leaks and memory pressure

The current implementation creates two operational problems:

  1. Unclosed cursor: the Cursor returned by masterRedisTemplate.scan(options).forEachRemaining(keys::add) is never explicitly closed, so the underlying Redis connection stays held. The Spring Data Redis documentation recommends wrapping the Cursor in try-with-resources so it is closed automatically.
  2. Full key set held in memory: every key is collected into the keys list before a single bulk delete, so heap usage spikes when many products are cached. This matters because deleteListAll() is called from five places, including product create/update and like changes.

Delete in batches as the cursor is consumed, close the cursor with try-with-resources, and add a load test that measures memory usage and invalidation latency under a large key count (e.g., 100k keys).

Suggested fix
+import org.springframework.data.redis.core.Cursor;
 ...
-            List<String> keys = new ArrayList<>();
-            masterRedisTemplate.scan(options).forEachRemaining(keys::add);
-            if (!keys.isEmpty()) {
-                masterRedisTemplate.delete(keys);
-            }
+            List<String> batch = new ArrayList<>(500);
+            try (Cursor<String> cursor = masterRedisTemplate.scan(options)) {
+                while (cursor.hasNext()) {
+                    batch.add(cursor.next());
+                    if (batch.size() == 500) {
+                        masterRedisTemplate.delete(batch);
+                        batch.clear();
+                    }
+                }
+            }
+            if (!batch.isEmpty()) {
+                masterRedisTemplate.delete(batch);
+            }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/product/ProductCacheService.java`
around lines 121 - 129, The scan usage in ProductCacheService (ScanOptions +
masterRedisTemplate.scan(...)) leaks Redis connections because the Cursor isn't
closed and it loads all keys into memory before deleting; change deleteListAll()
to wrap the Cursor returned by masterRedisTemplate.scan(options) in a
try-with-resources to ensure closure, accumulate keys into a small fixed-size
batch (e.g., 1000) and call masterRedisTemplate.delete(batch) repeatedly rather
than collecting all keys into one List, and add a load test that exercises
deleteListAll() with large key counts (e.g., 100k) to measure memory and
latency.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
scripts/add-indexes.sql (2)

21-24: Make the script idempotent so re-runs don't fail

The current CREATE INDEX statements error out if the index already exists, which can bite when the script is re-run in production or during failure recovery. Note that stock MySQL (including 8.0) does not support CREATE INDEX IF NOT EXISTS; on MariaDB the statements can be rewritten as:

♻️ Idempotent version of the script
-CREATE INDEX idx_products_brand_likes   ON products (brand_id, likes_count);
-CREATE INDEX idx_products_brand_price   ON products (brand_id, price);
-CREATE INDEX idx_products_created_at    ON products (created_at);
-CREATE INDEX idx_products_brand_created ON products (brand_id, created_at);
+CREATE INDEX IF NOT EXISTS idx_products_brand_likes   ON products (brand_id, likes_count);
+CREATE INDEX IF NOT EXISTS idx_products_brand_price   ON products (brand_id, price);
+CREATE INDEX IF NOT EXISTS idx_products_created_at    ON products (created_at);
+CREATE INDEX IF NOT EXISTS idx_products_brand_created ON products (brand_id, created_at);

On MySQL itself (5.7 or 8.0), use a stored procedure or a conditional check against information_schema instead.
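A sketch of that conditional-DDL route on stock MySQL, which lacks CREATE INDEX IF NOT EXISTS: the table and index names below match the script, while the procedure name is illustrative. information_schema.statistics holds one row per indexed column, so an existence check against it is enough to guard the DDL.

```sql
-- Illustrative guard: create idx_products_brand_likes only if it is absent.
DROP PROCEDURE IF EXISTS add_index_if_missing;
DELIMITER //
CREATE PROCEDURE add_index_if_missing()
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM information_schema.statistics
        WHERE table_schema = DATABASE()
          AND table_name   = 'products'
          AND index_name   = 'idx_products_brand_likes'
    ) THEN
        CREATE INDEX idx_products_brand_likes ON products (brand_id, likes_count);
    END IF;
END //
DELIMITER ;
CALL add_index_if_missing();
DROP PROCEDURE add_index_if_missing;
```

The same guard can be repeated (or parameterized) for the other three indexes so the whole script becomes safely re-runnable.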

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/add-indexes.sql` around lines 21 - 24, The CREATE INDEX statements
for idx_products_brand_likes, idx_products_brand_price, idx_products_created_at,
and idx_products_brand_created must be made idempotent to avoid errors on
re-run; update the script to use a safe path for both MySQL 8.0+ and older
versions: for MySQL 8.0+, replace each CREATE INDEX with CREATE INDEX IF NOT
EXISTS ... (or the engine-specific equivalent), and for MySQL 5.7 implement a
conditional check against information_schema (or drop the index if exists then
recreate) before running CREATE INDEX; apply these changes to the four index
definitions so re-running the script won’t fail.

21: Under heavy like traffic, monitor the index maintenance cost

Because idx_products_brand_likes includes the likes_count column, every LikeFacade.addLike()/removeLike() call triggers an index reorder. If ProductJpaRepository.incrementLikesCount() is a hot path, write performance can suffer.

Operationally, the following should be monitored:

  • growth rate of SHOW STATUS LIKE 'Handler_write'
  • p99 latency of like requests
  • InnoDB buffer pool hit ratio

Since this interacts with the cache-invalidation strategy (full deletion of the list cache), a burst of likes can produce cache misses and index write load at the same time.
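The asynchronous-counter alternative mentioned above (buffer like deltas, flush them in batches) can be sketched with the JDK alone. The class and method names here are hypothetical; a real version would have a scheduled job drain the buffer into incrementLikesCount() or a Redis counter.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch: buffers like-count deltas in memory and drains them
// in one batch, so the hot path never touches the indexed likes_count column.
public class BufferedLikeCounter {
    private final ConcurrentHashMap<Long, LongAdder> deltas = new ConcurrentHashMap<>();

    /** Hot path: O(1) in-memory increment, no DB write. */
    public void increment(long productId) {
        deltas.computeIfAbsent(productId, id -> new LongAdder()).increment();
    }

    /** Called by a scheduled job: returns accumulated deltas and resets the buffer. */
    public Map<Long, Long> drain() {
        Map<Long, Long> snapshot = new ConcurrentHashMap<>();
        deltas.forEach((id, adder) -> {
            long n = adder.sumThenReset();
            if (n != 0) snapshot.put(id, n);
        });
        return snapshot;
    }

    public static void main(String[] args) {
        BufferedLikeCounter counter = new BufferedLikeCounter();
        counter.increment(1L);
        counter.increment(1L);
        counter.increment(2L);
        Map<Long, Long> batch = counter.drain();
        System.out.println(batch.get(1L) + " " + batch.get(2L)); // 2 1
        System.out.println(counter.drain().isEmpty());           // true
    }
}
```

The trade-off is eventual consistency of likes_count (bounded by the flush interval), in exchange for removing per-like index maintenance from the write path.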

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/add-indexes.sql` at line 21, The composite index
idx_products_brand_likes on products(brand_id, likes_count) causes index
maintenance on every LikeFacade.addLike()/removeLike() and will harm hot-path
writes from ProductJpaRepository.incrementLikesCount(); either remove
likes_count from the index (keep index on brand_id only) or replace the
synchronous counter update with an asynchronous/append-only counter (e.g.,
separate likes_counter table, Redis counter with periodic flush, or background
aggregation) and update queries that relied on the composite ordering
accordingly; additionally add monitoring for Handler_write, p99 like latency,
and InnoDB buffer pool hit ratio to the deployment runbook to detect
regressions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/add-indexes.sql`:
- Around line 21-24: The current index set in scripts/add-indexes.sql doesn't
cover queries from ProductRepositoryImpl.findProducts() when brandId == null (it
calls findAllByDeletedAtIsNull(pageable)), so add single-column indexes for
price and likes_count (e.g., idx_products_price on price and idx_products_likes
on likes_count) to support ORDER BY price and ORDER BY likes_count without brand
filter; also make all CREATE INDEX statements idempotent by using CREATE INDEX
IF NOT EXISTS (or equivalent conditional creation) so re-running the script
won't fail if an index already exists.

---

Nitpick comments:
In `@scripts/add-indexes.sql`:
- Around line 21-24: The CREATE INDEX statements for idx_products_brand_likes,
idx_products_brand_price, idx_products_created_at, and
idx_products_brand_created must be made idempotent to avoid errors on re-run;
update the script to use a safe path for both MySQL 8.0+ and older versions: for
MySQL 8.0+, replace each CREATE INDEX with CREATE INDEX IF NOT EXISTS ... (or
the engine-specific equivalent), and for MySQL 5.7 implement a conditional check
against information_schema (or drop the index if exists then recreate) before
running CREATE INDEX; apply these changes to the four index definitions so
re-running the script won’t fail.
- Line 21: The composite index idx_products_brand_likes on products(brand_id,
likes_count) causes index maintenance on every LikeFacade.addLike()/removeLike()
and will harm hot-path writes from ProductJpaRepository.incrementLikesCount();
either remove likes_count from the index (keep index on brand_id only) or
replace the synchronous counter update with an asynchronous/append-only counter
(e.g., separate likes_counter table, Redis counter with periodic flush, or
background aggregation) and update queries that relied on the composite ordering
accordingly; additionally add monitoring for Handler_write, p99 like latency,
and InnoDB buffer pool hit ratio to the deployment runbook to detect
regressions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 9f177336-78d9-4df2-93fe-afc98c9b54ff

📥 Commits

Reviewing files that changed from the base of the PR and between 4724df5 and b949147.

📒 Files selected for processing (1)
  • scripts/add-indexes.sql

Comment on lines +21 to +24
CREATE INDEX idx_products_brand_likes ON products (brand_id, likes_count);
CREATE INDEX idx_products_brand_price ON products (brand_id, price);
CREATE INDEX idx_products_created_at ON products (created_at);
CREATE INDEX idx_products_brand_created ON products (brand_id, created_at);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: check which sort combinations ProductFacade invokes without a brandId

# Find call patterns where sort options are used while the brandId parameter is null
rg -n -A5 'getProducts' --type=java | head -50

# Locate the findAllByDeletedAtIsNull definition and its usages
ast-grep --pattern 'findAllByDeletedAtIsNull($$$)'

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 5990


🏁 Script executed:

# Find ProductRepositoryImpl and understand the query patterns
fd 'ProductRepositoryImpl' -t f --type=java

# Also find ProductRepository or ProductJpaRepository
fd 'ProductRepository' -t f --type=java | grep -E '(Impl|Jpa)'

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 428


🏁 Script executed:

# Look at ProductFacade to understand sort option mappings
rg -n 'sort' apps/commerce-api/src/main/java/com/loopers/application/product/ProductFacade.java -A 3 -B 3

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1495


🏁 Script executed:

# Check ProductService to see how getProducts handles different sort options
rg -n 'getProducts|sort|price|likes' apps/commerce-api/src/main/java/com/loopers/domain/product/ -t java -A 2 | head -100

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 6133


🏁 Script executed:

# Find ProductRepositoryImpl and examine the findProducts method
fd 'ProductRepositoryImpl' --type f

# Also check for ProductRepository interfaces
fd 'ProductRepository' --type f | grep -v test

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 354


🏁 Script executed:

# Search for findProducts implementation
rg -n 'findProducts' apps/commerce-api/src/main -A 10 | head -80

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 4057


🏁 Script executed:

# Look for findAllByDeletedAtIsNull usage
rg -n 'findAllByDeletedAtIsNull' apps/commerce-api/src/main -B 2 -A 5

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 5848


🏁 Script executed:

# Check the SQL script for idempotency (IF NOT EXISTS clause)
cat scripts/add-indexes.sql

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java

Length of output: 1259


Missing indexes for price/likes sorts without a brand filter, plus re-run failure risk

When brandId == null, ProductRepositoryImpl.findProducts() calls findAllByDeletedAtIsNull(pageable), and the composite (brand_id, ...) indexes cannot be selected for that query. The current index design does not cover these patterns:

  • WHERE deleted_at IS NULL ORDER BY price ASC/DESC → no usable index (needs idx_products_price)
  • WHERE deleted_at IS NULL ORDER BY likes_count DESC → no usable index (needs idx_products_likes)

Because brandId is optional at the controller (@RequestParam(required = false)), this pattern occurs in practice, and as the table grows the resulting full table scans will degrade performance.

In addition, the CREATE INDEX statements carry no idempotency guard, so re-running the script fails once an index exists. To support re-application in production, add IF NOT EXISTS (MariaDB) or a conditional check against information_schema (stock MySQL).

Recommended index additions and revised script
CREATE INDEX IF NOT EXISTS idx_products_brand_likes   ON products (brand_id, likes_count);
CREATE INDEX IF NOT EXISTS idx_products_brand_price   ON products (brand_id, price);
CREATE INDEX IF NOT EXISTS idx_products_created_at    ON products (created_at);
CREATE INDEX IF NOT EXISTS idx_products_brand_created ON products (brand_id, created_at);
-- 브랜드 필터 없는 정렬 쿼리 커버
CREATE INDEX IF NOT EXISTS idx_products_price         ON products (price);
CREATE INDEX IF NOT EXISTS idx_products_likes         ON products (likes_count);
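Once the indexes are in place, the plan can be checked directly to confirm the original type=ALL full scan is gone. The queries below mirror the likes-descending list query; the selected columns (id, name) are illustrative and should be adjusted to the real entity.

```sql
-- Expect type=ref (or range) with key=idx_products_brand_likes,
-- and no "Using filesort" in Extra once the composite index exists.
EXPLAIN
SELECT id, name, price, likes_count
FROM products
WHERE brand_id = 1
ORDER BY likes_count DESC
LIMIT 20;

-- Brand-filterless variant that motivates idx_products_likes:
EXPLAIN
SELECT id, name, price, likes_count
FROM products
WHERE deleted_at IS NULL
ORDER BY likes_count DESC
LIMIT 20;
```

On MySQL 8.0 a descending scan over an ascending index satisfies the ORDER BY without a filesort, so the key column of the plan is the main thing to watch.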
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/add-indexes.sql` around lines 21 - 24, The current index set in
scripts/add-indexes.sql doesn't cover queries from
ProductRepositoryImpl.findProducts() when brandId == null (it calls
findAllByDeletedAtIsNull(pageable)), so add single-column indexes for price and
likes_count (e.g., idx_products_price on price and idx_products_likes on
likes_count) to support ORDER BY price and ORDER BY likes_count without brand
filter; also make all CREATE INDEX statements idempotent by using CREATE INDEX
IF NOT EXISTS (or equivalent conditional creation) so re-running the script
won't fail if an index already exists.
